
    Does the world need a scientific society for research on how to improve healthcare?

    In this editorial, we reflect on the arguments for starting a scientific society focused on research on how to improve healthcare. This society would take an inclusive approach to what constitutes healthcare, including, for instance, mental healthcare, treatment for substance abuse, the work of allied health professions, and preventive healthcare. The society would be open to researchers from all traditions; we likewise take an inclusive approach to what constitutes scientific research, as long as it uses rigorous methods, is focused on improving healthcare, and aims at knowledge that can be transferred across settings. The society would primarily target scientific researchers but would welcome others with an interest in this area of research, regardless of their discipline, position, field of application, or group affiliation (e.g., improvement science, behavioral medicine, knowledge translation). A society would need fruitful collaboration with related societies and organizations, which may include holding combined meetings. Special links may be developed with one or more journals. A website providing information on relevant resources, events, and training opportunities is another key activity. The society would also provide a voice for the field at funding agencies, in political arenas, and in similar institutions. An organizational structure and financial resources are required to develop and run these activities. Our aim is to start an international debate and to discover whether we can establish a shared vision across academics and stakeholders engaged in creating scientific knowledge on how to improve healthcare. We invite readers to express their views in the online questionnaire accessed via the link provided at the end of the editorial.

    What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals?

    BACKGROUND: We conducted this analysis to determine i) which journals publish high-quality, clinically relevant studies in internal medicine, general/family practice, general practice nursing, and mental health; and ii) the proportion of clinically relevant articles in each journal. METHODS: We performed an analytic survey of a hand search of 170 general medicine, general healthcare, and specialty journals for 2000. Research staff assessed individual articles by using explicit criteria for scientific merit for healthcare application. Practitioners assessed the clinical importance of these articles. Outcome measures were the number of high-quality, clinically relevant studies published in the 170 journal titles and how many of these were published in each of four discipline-specific, secondary "evidence-based" journals (ACP Journal Club for internal medicine and its subspecialties; Evidence-Based Medicine for general/family practice; Evidence-Based Nursing for general practice nursing; and Evidence-Based Mental Health for all aspects of mental health). Original studies and review articles were classified for purpose: therapy and prevention, screening and diagnosis, prognosis, etiology and harm, economics and cost, clinical prediction guides, and qualitative studies. RESULTS: We evaluated 60,352 articles from 170 journal titles. The pass criteria of high-quality methods and clinically relevant material were met by 3059 original articles and 1073 review articles. For ACP Journal Club (internal medicine), four titles supplied 56.5% of the articles and 27 titles supplied the other 43.5%. For Evidence-Based Medicine (general/family practice), five titles supplied 50.7% of the articles and 40 titles supplied the remaining 49.3%. For Evidence-Based Nursing (general practice nursing), seven titles supplied 51.0% of the articles and 34 additional titles supplied 49.0%. 
For Evidence-Based Mental Health (mental health), nine titles supplied 53.2% of the articles and 34 additional titles supplied 46.8%. For the disciplines of internal medicine, general/family practice, and mental health (but not general practice nursing), the number of clinically important articles was correlated with Science Citation Index (SCI) Impact Factors. CONCLUSIONS: Although many clinical journals publish high-quality, clinically relevant and important original studies and systematic reviews, the articles for each discipline studied were concentrated in a small subset of journals. This subset varied according to healthcare discipline; however, many of the important articles for all disciplines in this study were published in broad-based healthcare journals rather than subspecialty or discipline-specific journals.

    What differences are detected by superiority trials or ruled out by noninferiority trials? A cross-sectional study on a random sample of two-hundred two-arms parallel group randomized clinical trials

    BACKGROUND: The smallest difference to be detected in superiority trials or the largest difference to be ruled out in noninferiority trials is a key determinant of sample size, but little guidance exists to help researchers in their choice. The objectives were to examine the distribution of differences that researchers aim to detect in clinical trials and to verify that those differences are smaller in noninferiority compared to superiority trials. METHODS: Cross-sectional study based on a random sample of two hundred two-arm, parallel group superiority (100) and noninferiority (100) randomized clinical trials published between 2004 and 2009 in 27 leading medical journals. The main outcome measure was the smallest difference in favor of the new treatment to be detected (superiority trials) or largest unfavorable difference to be ruled out (noninferiority trials) used for sample size computation, expressed as standardized difference in proportions, or standardized difference in means. Student t test and analysis of variance were used. RESULTS: The differences to be detected or ruled out varied considerably from one study to the next; e.g., for superiority trials, the standardized difference in means ranged from 0.007 to 0.87, and the standardized difference in proportions from 0.04 to 1.56. On average, superiority trials were designed to detect larger differences than noninferiority trials (standardized difference in proportions: mean 0.37 versus 0.27, P = 0.001; standardized difference in means: 0.56 versus 0.40, P = 0.006). Standardized differences were lower for mortality than for other outcomes, and lower in cardiovascular trials than in other research areas. CONCLUSIONS: Superiority trials are designed to detect larger differences than noninferiority trials are designed to rule out. The variability between studies is considerable and is partly explained by the type of outcome and the medical context. 
A more explicit and rational approach to choosing the difference to be detected or to be ruled out in clinical trials may be desirable.
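The standardized differences discussed above map directly to sample size: for a two-arm comparison of means with two-sided significance level alpha and a given power, the per-arm size is roughly 2(z_{1-alpha/2} + z_{1-beta})^2 / d^2, where d is the standardized difference. A minimal sketch, using the mean standardized differences reported above (the function name and the alpha/power defaults are illustrative, not taken from the study):

```python
from math import ceil
from statistics import NormalDist


def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-arm trial comparing means,
    given a standardized difference d (difference in means / SD)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided test
    z_beta = z(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)


# Mean standardized differences in means reported above
print(n_per_arm(0.56))  # superiority trials: 51 per arm
print(n_per_arm(0.40))  # noninferiority trials: 99 per arm
```

Note that noninferiority trials conventionally use a one-sided alpha, which would shrink the numbers slightly; the point here is only that halving d roughly quadruples the required sample size.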

    Electronic search strategies to identify reports of cluster randomized trials in MEDLINE: low precision will improve with adherence to reporting standards

    BACKGROUND: Cluster randomized trials (CRTs) present unique methodological and ethical challenges. Researchers conducting systematic reviews of CRTs (e.g., addressing methodological or ethical issues) require efficient electronic search strategies (filters or hedges) to identify trials in electronic databases such as MEDLINE. According to the CONSORT statement extension to CRTs, the clustered design should be clearly identified in titles or abstracts; however, variability in terminology may make electronic identification challenging. Our objectives were to (a) evaluate the sensitivity (recall) and precision of a well-known electronic search strategy ("randomized controlled trial" as publication type) with respect to identifying CRTs, (b) evaluate the feasibility of new search strategies targeted specifically at CRTs, and (c) determine whether CRTs are appropriately identified in the titles or abstracts of reports and whether there has been improvement over time. METHODS: We manually examined a wide range of health journals to identify a gold standard set of CRTs. Search strategies were evaluated against the gold standard set, as well as an independent set of CRTs included in previous systematic reviews. RESULTS: The existing strategy (randomized controlled trial.pt) is sensitive (93.8%) for identifying CRTs but has relatively low precision (9%; number needed to read, 11); the number needed to read can be halved to 5 (precision 18.4%) by combining with cluster design-related terms using the Boolean operator AND; combining with the Boolean operator OR maximizes sensitivity (99.4%) but would require 28.6 citations to be read to identify one CRT. Only about 50% of CRTs are clearly identified as cluster randomized in titles or abstracts; approximately 25% can be identified based on the reported units of randomization but are not amenable to electronic searching; the remaining 25% cannot be identified except through manual inspection of the full-text article.
The proportion of trials clearly identified has increased from 28% in 2000-2003 to 60% in 2004-2007 (absolute increase 32%, 95% CI 17% to 47%). CONCLUSIONS: CRTs should include the phrase "cluster randomized trial" in their titles or abstracts; this will facilitate more accurate indexing of the publication type by reviewers at the National Library of Medicine and efficient textword retrieval of the subset employing cluster randomization.
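The retrieval statistics above reduce to three simple ratios, and number needed to read (NNR) is just the reciprocal of precision. A minimal sketch; the counts below are hypothetical, chosen only to mirror the reported 93.8% sensitivity and 9% precision, and are not taken from the study:

```python
def search_metrics(true_positives: int, retrieved: int, total_relevant: int):
    """Sensitivity (recall), precision, and number needed to read for an
    electronic search filter. NNR = retrieved / true_positives = 1 / precision."""
    sensitivity = true_positives / total_relevant
    precision = true_positives / retrieved
    nnr = retrieved / true_positives  # citations read per relevant hit found
    return sensitivity, precision, nnr


# Hypothetical counts: the filter retrieves 1000 citations, 90 of which are
# CRTs, out of 96 CRTs in the gold standard set.
sens, prec, nnr = search_metrics(90, 1000, 96)
print(f"sensitivity={sens:.3f} precision={prec:.3f} NNR={nnr:.1f}")
```

This makes the trade-off in the abstract concrete: ANDing in design-related terms raises precision (lowering NNR), while ORing raises sensitivity at the cost of many more citations to read.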

    Use of the Internet for health information by physicians for patient care in a teaching hospital in Ibadan, Nigeria

    BACKGROUND: The Internet is the world's largest network of information, communication, and services. Although the Internet is widely used in medicine and has made a significant impact on research, training, and patient care, few studies have explored the extent to which Nigerian physicians use Internet resources for patient care. The objective of this study was to assess physicians' use of the Internet for health information for patient care. METHODS: 172 physicians at the University College Hospital (UCH), Ibadan, Nigeria, completed a 31-item, anonymous, standardized questionnaire. The Epi-Info software was used for data analysis. RESULTS: The mean age of the respondents was 31.95 years (SD 4.94). Virtually all (98%) of the respondents had used the Internet; 76% accessed it from cybercafes. E-mail was the most commonly used Internet service (64%). Ninety percent of the respondents reported that they had obtained information from the Internet for patient care; of this number, 76.2% had searched a database. The database most recently searched was MEDLINE/PubMed in 99% of cases. Only 7% of the respondents had ever searched the Cochrane Library. More than half (58.1%) felt they lacked the confidence to download full-text articles from online sources such as the Health InterNetwork Access to Research Initiative (HINARI). Multiple barriers to increased use of the Internet were identified, including poor availability of broadband (fast connection speed) Internet access, lack of information-searching skills, cost of access, and information overload. CONCLUSION: Physicians' use of the Internet for health information for patient care was widespread, but use of evidence-based medicine resources such as the Cochrane Library, UpToDate, and Clinical Evidence was minimal. Awareness and training in the use of EBM resources for patient care are needed. Introduction of EBM into the teaching curriculum would enhance the use of EBM resources by physicians for patient care.

    A web-based library consult service for evidence-based medicine: Technical development

    BACKGROUND: Incorporating evidence-based medicine (EBM) into clinical practice requires clinicians to learn to efficiently gain access to clinical evidence and effectively appraise its validity. Even with current electronic systems, selecting literature-based data to solve a single patient-related problem can require more time than practicing physicians or residents can spare. Clinical librarians, as informationists, are uniquely suited to assist physicians in this endeavor. RESULTS: To improve support for evidence-based practice, we have developed a web-based EBM library consult service application (LCS). Librarians use the LCS system to provide full-text evidence-based literature with critical appraisal in response to a clinical question asked by a remote physician. LCS uses an entirely Free/Open Source Software platform and will be released under a Free Software license. In the first year of the LCS project, the software was successfully developed and a reference implementation put into active use. Two years of evaluation of the clinical, educational, and attitudinal impact on physician users and librarian staff are underway and are expected to lead to refinement and wide dissemination of the system. CONCLUSION: A web-based EBM library consult model may provide a useful way for informationists to assist clinicians and is feasible to implement.

    Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence

    Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric+, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P < 0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric+ features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted decision support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by flagging studies associated with discordant decisions for further consideration. © 2014 Bekhuis et al.
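The F3 summary metric used above is the general F-beta measure with beta = 3, which weights recall beta-squared (nine) times as heavily as precision. That weighting suits citation screening, where missing an eligible study is far costlier than reading an extra citation. A minimal sketch of the measure (function name ours):

```python
def f_beta(precision: float, recall: float, beta: float = 3.0) -> float:
    """F-beta score: a weighted harmonic mean of precision and recall in
    which recall counts beta**2 times as much as precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)


# At beta = 3, high recall dominates the score:
print(f_beta(0.20, 0.95))  # modest precision, near-perfect recall
print(f_beta(0.95, 0.20))  # the reverse scores much lower
```

With beta = 1 the two calls would score identically; the asymmetry at beta = 3 is exactly why a classifier tuned to F3 favors catching every eligible study over avoiding false positives.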